While some may dismiss the possibility of the electorate becoming more rational in its political decision-making, the idea that citizens are inherently irrational is a myth.
Your counterargument in no way engages with the most important parts of, say, Bryan Caplan’s argument for voter irrationality, which is that on the margin voters gain nothing by voting well and pay nothing for voting according to their emotions, whereas voting well requires costly investments of time and attention.
It’s possible that people could, by investing more energy and time into the voting process, vote better. But why would we believe that’s the right thing for people to do, on an individual level?
Yup, agreed that it may well not be worthwhile for voters who vote for reasons that are not oriented toward the most social good to vote rationally. This is why I say this is a project informed by EA values—it comes from the perspective that voting is like donating thousands of dollars to charity. For those who are purely self-interested, it’s really not rational to vote.
So to be clear, it’s not meant to target those who don’t care about the public good—just those mistaken about what is the best way to achieve the public good. For instance, plenty of voters are mistaken about the state of reality, and some of those folks would genuinely want the most good. The project is not meant to reach all, in other words—just that select slice.
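For context, here is a rough back-of-the-envelope sketch of the expected-value reasoning behind the “donating thousands of dollars to charity” framing above; the specific numbers are purely illustrative assumptions, not figures from the post:

$$\mathbb{E}[\text{social value of one vote}] \approx p_{\text{decisive}} \times \Delta W \approx 10^{-7} \times \$10^{10} = \$1{,}000$$

where $p_{\text{decisive}}$ is the (assumed) chance that a single vote swings a close national election and $\Delta W$ is the (assumed) difference in total social welfare between outcomes. A purely self-interested voter instead weighs $p_{\text{decisive}}$ against only their personal stake, which is negligible—hence the claim that investing effort in voting well only pays off for those who weigh the public good.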
I would like to caution commenters that it seems to me like the comment section of this post is at risk of becoming an object-level political argument of the sort specifically proscribed by the rules. This may be amplified by the temporary disablement of downvoting because it means that controversial comments—those which would normally have a high sum, but a small difference, of upvotes+downvotes—will tend to rise to the top.
I would suggest spending extra time looking for insightful comments that are not object-level political arguments to upvote, and consciously avoiding upvoting object-level political arguments that you agree with.
I would like to caution commenters that it seems to me like the comment section of this post is at risk of becoming an object-level political argument of the sort specifically proscribed by the rules.
And the (false) object level political statements in the OP aren’t proscribed by the rules?
Well I mean let him have his jab, the point of the site is to rise above too much unproductive object-level stuff. For an unproductive debate to be avoided, someone usually has to rise above it. Let that person be you.
I wouldn’t go ad-hominem against Gleb_Tsipursky, ad-hominem is never a good idea. He seems to me to be an earnest debater with a lot to offer, but even if he wasn’t I would still attack his arguments rather than attack the man.
It’s still ad-hominem to bring it up as an argument, and probably counterproductive to try and follow the guy around bringing up his mistakes everywhere he goes. Unless he is doing something directly connected with those criticisms.
Those criticisms are extremely relevant here. We aren’t having an abstract debate about rationality in politics, we are commenting on a post which announced a new project led by Gleb Tsipursky to try to bring rationality to politics. If you want to predict what this project is likely to end up doing, or how successful it will be, then one of the most relevant pieces of information that you have is Gleb’s track record.
Yeah I mean cargo cult rationality is definitely a risk. But still, it’s better to sink the argument than chase the guy, and as I said in a ‘no political debate’ setting, someone has to rise above it a bit.
I have actually been thinking about posting about politics here, I think there are interesting things going on in our politics but people making personal attacks against other commenters makes it harder to have a good debate.
Well if you have to choose between attacking someone’s object-level arguments about politics or attacking their person, I would say the latter is a greater evil even when the topic is controversial.
In the comments to this post I would avoid both, it’s reasonable to agree to disagree or just take the argument to PMs or something, or maybe have a special ‘politics’ thread. I mean you can even say “I disagree with X and took it to PMs” to avoid giving the impression that his assertion was unchallenged.
“I disagree with X and took it to PMs” to avoid giving the impression that his assertion was unchallenged.
What would be the point of that? To convince the other guy to see his mistakes? That only works if the person you’re debating is well-meaning and exceptionally rational.
Otherwise, the point of debating in public is so that observers can see for themselves who’s being rational.
Yeah but that kind of debating tends to massively incentivize techniques for sophistry, leads to long pointless debates that take up time and yield no new knowledge. Here on LW we aim higher than that, and that is why there are norms to try and prevent it.
Meta-level discussion is never intended to “rise above” ground-level politics—that is indeed an illusion as you say. Rather, it’s intended to side-step the former temporarily, while still being useful by creating better frameworks for deliberation, mediation and similar good practices. It’s very important to understand this—any talk of “rising above” the actual, real-world issues is illusory and potentially dangerous.
Rather, it’s intended to side-step the former temporarily, while still being useful by creating better frameworks for deliberation, mediation and similar good practices
The problem is that it’s frequently used as an attempt to reach conclusions while side-stepping the whole messy “looking at the facts on the ground” thing.
I am suggesting that people comment and vote a certain way; I don’t have control over the moderators, who would be the ones with the power to do anything about the post itself. Let me explicitly state that I think people probably shouldn’t upvote this post, even if it contains things they like, if they think (as I do) that it will promote and increase object-level political discussion. Unfortunately, because downvoting is disabled (for unrelated good reasons I support), it’s hard for me and others to formally note disapproval (by downvoting) of the politics, so I’m asking other people to try to avoid noting (in the upvote sense) approval of it unless they really mean it.
I am a moderator here. I have been contemplating removing this post for most of the time since I saw it. I am still indecisive. Feel free to comment here or PM me with your preferences.
I wish I had a strong opinion to offer on this question. I think there are probably good arguments both for and against removing the post. I wish you luck in your difficult role.
I think object-level political discussion is undesirable for LW (at least at this time), but that’s quite tangential to this post, which is mostly about meta-level issues—and I consider these to be quite important. I think the post should stay.
I think this post is an interesting idea, but I feel like it’s built on some shaky assumptions.
As an instructive example, suppose (and let me DISCLAIM THAT I AM NOT ENDORSING THIS VIEW) that you are a racist, believe that people of certain races are objectively bad for a country because they have lower IQ and are more likely to commit crimes, and furthermore you want fewer (or none) of them in your country and want to use political means to achieve this. Then Obama is elected. Now you have two options:
1) forthrightly and honestly state that you are a racist, form a pro-racism political group, perhaps even call it the “Racist Party”, and campaign to impeach Obama for being black.
2) make some shit up about Obama being a member of ISIS and claim he wasn’t born in the USA.
Before we go on, NOTE AGAIN THAT I AM NOT ENDORSING THIS VIEW I AM ONLY USING IT AS AN EXAMPLE.
So, it seems fairly clear that option (1) is not really the rational choice. You would be doxxed, hounded on social media, and almost certainly lose your job. Others who agreed with you wouldn’t support you, because they would fear that the doxxing and firing and public shaming would be directed against them. So you choose (2): you get involved in the “birther” movement or whatever.
Now Gleb comes along and says “Hey Guys! Exciting news from the world of theoretical rationality!”
pollution of the truth will devastate all of us in a tragedy of the commons.
Wise decision-making by the citizenry is beneficial to all but a few interest groups devoted to deceiving the public.
Or to take a different example, suppose you think that it would kind of suck for your country to lose all its industry in order to fight global warming, whilst other countries continue to pollute, and suppose you also really don’t want to lose your job at a factory and have your life ruined right now for the benefit of anonymous foreigners 20-70 years in the future. You have two choices:
1) Forthrightly and honestly state that you don’t really care enough about some marginal third-world foreigner getting killed in a slightly-more-severe-than-average drought to lose your job and livelihood for them right now. Make a political party called the “Procrastinate the Environment Party”, and argue that Global Warming is totally legit, but we can probably get away with procrastinating it. Also, limiting CO2 emissions is kind of like a game of chicken with other countries, so from a game-theory point of view it makes sense to keep polluting and see whether they crack first.
2) make up some shit about Global Warming being a liberal hoax
Or another example, suppose you believe that Global Warming is a hoax but questioning the word of certified expert scientists(tm) is not allowed in your social circle, so you come up with an elaborate meta-political explanation for why one can support the people calling Global Warming a hoax without believing it.
Yup, agreed that it may well not be wise for those who have racist beliefs to be open about them. The same applies to the global warming stuff.
This is why I say this is a project informed by EA values—it comes from the perspective that voting is like donating thousands of dollars to charity and that voters care about the public good. It’s not meant to target those who don’t care about the public good—just those mistaken about what is the best way to achieve the public good. For instance, plenty of voters are mistaken about the state of reality, and some of those folks would genuinely want the most good. The project is not meant to reach all, in other words—just that select slice.
In particular, you’re not interested in reaching the voters who don’t want, say, Muslim migrants raping and occasionally murdering girls in their neighborhoods. Good to know.
Well, there is a more serious flaw than that particular issue: if you reach out to a very small slice of humans in our world and persuade them that they should be more rational in politics, politics will not get more rational. You have to appeal to everyone or almost everyone.
So, for example, people who read Breitbart have to be on board, as well as people who read The Guardian and the Daily Kos.
Yup, agreed that it may well not be wise for those who have racist beliefs to be open about them … This is why I say this is a project informed by EA values … not meant to target those who don’t care about the public good
explicitly non-partisan effort
that wise decision-making by the citizenry is beneficial to all but a few interest groups devoted to deceiving the public… much more amenable to solution than partisan issues that only affect one side of the political spectrum.
I feel like these requirements are kind of contradictory. What if a lot of people are selfish, racist (in a broad sense) and want to procrastinate global warming? Are we saying that “rational” political debate benefits them, or that it doesn’t? If it doesn’t benefit them then why should they be on board?
Are we saying that you have to be a globalist effective altruist who puts the needs of distant strangers above those of their own families to benefit from rational politics? Very few people have values like that! IIRC even Peter Singer struggled with that!
who don’t care about the public good
Do you have to care about the public good of the whole world, or is it OK if you only care about the public good of your tribe/country/race?
I’m talking about prioritizing the good of the country as a whole, not necessarily distant strangers—although in my personal value stance, that would be nice. Like I said, it’s an EA project :-)
A political group composed only of people who prioritize the good of the country over their own subtribe or self will lack the support needed to flourish.
It’s not that people disagree or don’t know about the object level facts. It’s that people are actively fighting to gain relative advantage over others. And that is a cultural problem, not a political one.
As an instructive example, suppose (and let me DISCLAIM THAT I AM NOT ENDORSING THIS VIEW) that you are a racist, believe that people of certain races are objectively bad for a country because they have lower IQ and are more likely to commit crimes …
I realize that you have disclaimed any endorsement of this view, but some people might accidentally get the wrong idea of what “lower IQ” and “more likely to commit crimes” actually mean in these contexts, in the real world. Men have a “higher propensity to commit crimes” compared to women, and we don’t call them “objectively bad” for this. (Well, maybe some radfems do, actually. Not really sure about that!) People with a mere BA-level education have “lower IQ” compared to people with multiple PhD’s, and similarly, we don’t think that BA-holders are bad. In other words, to even treat this as if it were a colorable argument reveals a basic failure of rationality. I wouldn’t care about this usually, but this whole post is about making politics more rational, and pointing out these things seems like a good place to start.
Men have a “higher propensity to commit crimes” compared to women, and we don’t call them “objectively bad” for this. (Well, maybe some radfems do, actually. Not really sure about that!)
I looked for “Women are better than men” on Google, and I found a debate at debate.org which cited less crime as a reason that women are better than men.
Women are better. SO many reasons why: 14. It is less likely for a woman to be a serial killer, pedophile or rapist.
Guess what? women are actually superior to men! Here’s the score … women generally seem to have higher social and emotional intelligence than men, are less violent and aggressive, are almost never serial killers
Um, either the folks at debate.org and Psychology Today are secretly radfems, or I need to seriously update here. OK, I’m definitely re-assessing how common this line of argument (“Group X should be regarded as better than/superior to group Y, because of a slight difference in the average level of some psychological trait, such as propensity to commit crimes”) is in the real world. Thanks!
Men have a “higher propensity to commit crimes” compared to women, and we don’t call them “objectively bad” for this.
On the other hand we do have a “violence against women” act, and a whole section of the justice department dedicated to crimes committed by men against women.
So the point I am making is that we humans have set up a political system where “The Racist Party” and the “Procrastinate the Environment Party” are super-duper not allowed. If the incentives against espousing the view that you actually hold are much more severe than the incentives against trying to mess with epistemology so that you can take the action you wanted without having to reveal your “forbidden” view, then it makes sense to mess with epistemology.
And the reason that we can’t have a world where people lose their jobs and get doxxed and publicly shamed for breaking epistemological rules is that basically everyone does it. Dark side epistemology is a convergent instrumental goal in politics, so it’s really hard to build a coalition against it. And if you did have a system for enforcing epistemological standards, the first thing everyone would think is “Wow that’s great, how can I subvert this system to further my object-level preferences?”
Um, Breitbart News is hardly a credible site to use to attack Politifact. Besides, that citation also had the Washington Post and The New York Times—do you call them fake news as well?
Washington Post and The New York Times—do you call them fake news as well?
Yes, as I mentioned in my other comment. Now care to explain why you cited them in an article supposedly devoted to opposing fake news? Or is your definition of “fake news” news that contradicts something written in an “official true news source”, as opposed to something that contradicts reality?
I think the real problem here is that the pejorative term “Fake news” is succumbing to an effect I mentioned down in the comments earlier.
if you did have a system for enforcing epistemological standards, the first thing everyone would think is “Wow that’s great, how can I subvert this system to further my object-level preferences?”
Now that’s not to say that there isn’t a real phenomenon of inaccurate or totally fabricated news, but I think the term “Fake News” (“Fake News Outlet”) particularly lends itself to being used as a bludgeon to attack the other side, because it’s really satisfying to catch one inaccurate story on a site you disagree with and thereby write the rest of the content off as “Fake News”.
Such information often comes from the quickly-growing number of fake news sources.
Um, fake news sources like the New York Times have existed for at least a century, and probably for as long as news has existed. If anything is different in 2016, it’s that it’s becoming easier to check them and find out that they’re false.
Without intervention, these outcomes will most likely grow worse over time, as future politicians learn from the results of the 2016 election season and double down on this strategy of lies and manipulation.
Um, the candidate of lies and manipulation lost.
In this case, the commonly-shared resource is trust in our political system and a basic expectation of truth-telling, together with a strong expectation that politicians will back away from lies when called out. We have seen this resource gobbled up in the 2016 election season by the Trump campaign.
Um, I think you’re confused here. It was the Clinton campaign that was doing things like encouraging BLM with misleading statistics and outright lies.
It’s probably counterproductive to discuss object level politics. I kind of half-agree with you actually, but still, I can imagine this comment thread turning into an unproductive one.
I think it suffices to say that Gleb_Tsipursky has something of an anti-Trump political angle in the post (which may or may not be objectively correct), and agree to disagree as much as is possible on the object level.
It’s probably counterproductive to discuss object level politics.
So how do you propose to make politics more rational and better correspond to truth without discussing which policies are in fact rational and which political statements are true?
I think it suffices to say that Gleb_Tsipursky has something of an anti-Trump political angle in the post (which may or may not be objectively correct), and agree to disagree as much as is possible on the object level.
Well, I mean you could argue that he is looking for a Trump supporter who likes good epistemology to join his team so that their biases would cancel out?
But you’re right, it’s shaky ground.
So how do you propose to make politics more rational and better correspond to truth without discussing which policies are in fact rational and which political statements are true?
I’d probably start by making a new post called “Object-level contemporary political discussion open thread”. I wouldn’t do this because I expected a successful resolution, I would do it to contain the mess somewhere away from meta-level discussion.
I am comfortable with saying that my post is anti-post-truth politics. I think most LWs would agree that Trump relies more on post-truth tactics than other politicians. Note that I called out Democrats for doing so as well.
I agree with this view. His abuse is more blasé; that’s definitely true.
Brash man with a working-class NYC disposition: “Obama literally founded ISIS” or “Obama is secretly a Muslim”
Sensible people everywhere recoil and roll their eyes. Understanding why that’s absurd is pretty easy. The people who make those arguments aren’t exactly an intellectual class, and currently lack an intellectual ‘ruling caste.’
Refined person with an articulate tone of voice, and an Ivy League law degree: “Women are oppressed everywhere, and currently make 70 cents on the dollar of what a man makes.”
Not horrific, stated by a well-educated person. Sounds reasonable, based on ‘real research.’ Comes from a sense of seemingly genuine concern and outrage for an injustice.
I used to take the stance that the first was much worse, as it is more brash and shameless. I’m not sure anymore how to measure these two against each other. I have, absolutely without a doubt, been mind-killed on this specific topic, because I personally hate charlatan lawyers who think they have the right to tell me how to live my life.
Women are oppressed everywhere, and currently make 70 cents on the dollar of what a man makes.” based on ‘real research.’
And most disgusting of all, probably doesn’t get counted as a lie. This is the problem I have with Gleb’s claim about Trump lying more—the SJWs have found ways of lying that are not technically lies.
is it the post-truth world where true facts are lies because of reasons?
The false statement is “… therefore to be fair we should multiply every woman’s wage by 10/7.” Instead of something like “… so to promote equality we should stop discouraging fourth-grade girls from studying math.”
Those look like not-even-false claims because they almost are.
Agree that the attempts to rid academia of conservatives are bad.
Can you be comfortable saying that Trump lies more often, and more intensely, than prominent liberal politicians; usually does not back away from lies when called out; slams the credibility of those who call him out on lies; focuses on appealing to emotions over facts; tends to avoid providing evidence for assertions (such as that Russia was not behind the hack), etc.? This is what is meant by post-truth in the Oxford Dictionary definition of this term.
The problem I have is a measurement problem. How are we measuring lies? If I say that whales are fish and then I say all birds can fly, and you say the Holocaust didn’t happen, that’s 2 for me and 1 for you, so I’m a worse liar?
relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief
Note the implicit inference that such circumstances are more common now than in the past, which is almost certainly not true.
I didn’t realise the term “post truth” had a precise, official meaning? Anyway I would still say there is a bit of an issue measuring lies, but I definitely concede the point that Donald is very, very far from a truth teller.
Can you be comfortable saying that Trump lies more often, and more intensely, than prominent liberal politicians
I’m not sure about The_Jaded_One; he seems to be willing to assert false things under peer pressure. However, that statement is in fact false. Where by “false” I mean it doesn’t map to external, observable reality. Specifically, I mean that Trump’s statements tend to map to reality better than those of liberal politicians.
I would only agree that every major political party uses post-truth rhetorical methods and it is sad that each of them does. If you want to propose a unit of measurement for truthiness I’d consider comparing them.
I am comfortable with saying that my post is anti-post-truth politics. I think most LWs would agree that Trump relies more on post-truth tactics than other politicians.
Are you comfortable providing actual evidence for the claim that “Trump relies more on post-truth tactics than other politicians” or are you trying to argue for an epistemology of truth based on whatever the consensus by “experts” is?
I think it suffices to say that Gleb_Tsipursky has something of an anti-Trump political angle in the post (which may or may not be objectively correct)
It is quite important to note here that “an anti-Trump political angle” need not be pro-Obama or pro-Hillary. We should be careful not to fall for the all-too-common trap of hyper-factionalization and ‘politics as usual’!
Your document reads much more like “Rational Politics for Liberals.” That’s not necessarily a bad thing, but it’s really clear that you tacitly oppose lots of dissident/alternative/reactionary right views. I’m pretty sympathetic to what you’re trying to do, but I see it more as a concerted effort for thoughtful and rational discussion of how to solve the issues of alternative and neoreactionary right beliefs.
I don’t think any level of rational calculation of per-person terrorism risk will change the 20-50% of Americans who don’t want Muslim immigration. Terrorism is the nicest, cleanest, most discrete and emotionally actionable issue to focus around. The impression I get is more that many people prefer or like their own culture, and want to live in a culture with homogenous groups of people. There is at least enough research on the fact that homogenous groups are higher-trust and safer to make the preference to exclude sufficiently different people something that could be rational, or debatable in a rationalist framework (disclosure: I don’t think people are necessarily irrational for having ingroup preferences or outsider-anxiety).
I have spent a lot of time trying to think about how we can reconcile these worldviews between the in-group preference and the more universalist/progressive political model. It’s really hard, maybe impossible. Right now the intellectuals of each group (read: not the meme spammers) can’t even honestly discuss the issues. Their views are so beyond the pale of one another that they can’t even find a shared platform to say a thing. Maybe one goal would be to get the intellectuals of both groups to take a stand against pointless meme arguments.
I’m not a huge fan of Ezra Klein, but his interview with Tyler Cowen gets at this (http://marginalrevolution.com/marginalrevolution/2016/10/conversation-ezra-klein-2.html).
KLEIN: I strongly agree. We do not have a language for demographic anxiety that is not a language that is about racism. And we need one. I really believe this, and I believe it’s been a problem, particularly this year. It is clear, the evidence is clear. Donald Trump is not about “economic anxiety.”
I think a true rational politics project would work on developing this ‘language for demographic anxiety.’ If we can imagine a slightly better pundit news discussion:
Interviewer: We have Guest1 and Guest2. Guest1 prefers a more homogenous culture, which he argues is higher trust. Guest2 believes that the evolution of our culture as we know it has succeeded because we bring in different groups of people.
Guest 1: Great to be here, I have lots of respect for Guest2, but I think we need to focus on bringing highly educated people who match on our core cultural dimensions.
Guest 2: There is more than simply education, over time we have seen [previous groups] slowly absorb into America, and add their own cultural strategies into our melting pot.
...or something like that. Right now that conversation cannot happen. Or at least it can’t happen without faux and real outrage, accusations of racism, and so forth. And in saying all this, it’s probably clear where my political preferences rest. It’s a testament to the challenge of this endeavor that any individual’s model of what a ‘Rational Politics Project’ would look like secretly embeds the political views of its author.
The reason why culturally homogenous groups are higher trust is racism. The discussion from both sides needs to be about bad things, and racism is not infinitely bad or even any more inherently bad than inequality is.
...you [Gleb] tacitly oppose lots of dissident/alternative/reactionary right views
For what it’s worth, I think “dissident/alternative/reactionary” activists have a lot to learn still about what politics is actually like in the real world. Deliberation and compromise are critically important in real-world politicking, while writing obscure theoretical essays ala Moldbug and other ‘neo-reactionary’ thinkers is a lot less relevant. (Of course, this failure mode is by no means unique to neo-reactionaries—plenty of radical leftists do the same thing!) And I don’t think that even Donald Trump and his ‘alt-right’ supporters, who are obviously a lot closer to wielding real power, can escape this need for finding good compromises. (Especially in the longer run.)
… KLEIN: ”...It is clear, the evidence is clear. Donald Trump is not about ‘economic anxiety.’ ” …
Nitpick—a lot of people really want to believe this. ISTM that they do not care so much about understanding Donald Trump or his constituency, and they’re still trying to whitewash the blatant strategic mistakes of the Hillary Clinton campaign.
I see the situation right now as more liberals being closer to rational thinking than more conservatives, but it hasn’t been the case in the past. I don’t know how this document would read if more conservatives were closer to rational thinking.
Regarding the Muslim issue, you might want to check out the radio interview I linked in the document. It shows very clearly how I got a conservative talk show host to update toward being nicer to Muslims.
If you’re interested in participating in this project, email me at gleb [at] intentionalinsights [dot] org
...an explicitly non-partisan effort to gather thoughtful citizens of all political stripes devoted to spreading rational thinking and wise decision-making in the political arena. We see the lack of these practices as one of the worst problems for our global society in terms of how important, neglected, and tractable it is.
“Rational politics” seems like too narrow a focus to be successful in real-world politics. You should consider expanding this project from mere ‘rationality’ to encompass the broader notion of political virtues, as famously identified by political scientist Bernard Crick in his well-known book In Defence of Politics, which is often assigned as required reading in intro poli-sci courses.
In particular, as most people will know, “rationalism” in politics has some unfortunate connotations of ivory-tower over-intellectualism and disregard for most real-world issues and dynamics. Avoiding this impression, however misguided in this particular case, would seem to be of critical importance to any such project. You can view this as a simple, politically-grounded argument for why a broader focus on widely-recognized political virtues would in fact be highly desirable.
I hear you about “rationalism in politics.” The public-facing aspect of this project will be using terms like “post-lies movement” and so on. We’re using “Rational Politics” as the internal and provisional name for now, while we are gathering allies and spreading word about the project rather than doing much public outreach.
The public-facing aspect of this project will be using terms like “post-lies movement” and so on.
Good call—this will give folks a much better idea of what the project is actually about.
(Of course, ordinary sloppy thinking is just as problematic as overt “lies”, and I still think your project could usefully expand to encompass other facets of “wise decision-making” in a political context. There are many other “dark patterns”—particularly, black-and-white rhetoric that blatantly rejects any sort of compromise or adaptation as possible virtues—which are just as foolish and dangerous in practice (and there’s basically no controversy that this is the case, at least in the abstract). Again, Donald Trump’s campaign provides the best example of this as of late, but we’ve seen similar rhetoric from the “left” in the past—and some people would even say that the current campaign was no exception!)
We chose the issue of lies specifically because it is something a bunch of people can get behind opposing, across the political spectrum. Otherwise, we have to choose political virtues, and it’s always a trade-off. So the two fundamental orientations of this project are utilitarianism and anti-lies.
FYI, we plan to tackle sloppy thinking too, as I did in this piece, but that’s more complex, and it’s important to start with simple messages first. Heck, if we can get people to realize the simple difference between truth and comfort, I’d be happy.
So the two fundamental orientations of this project are utilitarianism and anti-lies.
Utilitarianism is nice of course, but since you’re operating in a political context here, it’s important to go for a politically mindful variety of utilitarianism that treats other people’s existing political stances as representative of, or at least as useful evidence for, what they actually care about. Virtues like adaptation, compromise, conciliation—even humor, sometimes—can be seen as ways to operationalize this sort of utilitarianism in practice—and also promote it to average folks who generally don’t know what “utilitarianism” is actually about!
This is probably too complex to hash out in comments—lots of semantics issues and some strategic/tactical information that might be best to avoid discussing publicly. If you’re interested in getting involved in the project and want to chat on Skype, email me at gleb [at] intentionalinsights [dot] org
No worries—I trust you to get the strategic/tactical side right, and it’s quite promising to see that you’re aware of these issues as well. I now think that this can be a very promising project, since you’re clearly evading the obvious pitfalls I was concerned about when I read the initial announcement!
In particular, as most people will know, “rationalism” in politics has some unfortunate connotations of ivory-tower over-intellectualism and disregard for most real-world issues and dynamics. Avoiding this impression, however misguided in this particular case,
Is it in fact misguided? Certainly looking at the OP the impression appears to be correct.
What are you actually planning to do that isn’t already being done? “Fake news” is already the biggest story of the year, but the Dunning-Kruger effect is ensuring that nothing actually changes.
While some may dismiss the possibility of the electorate becoming more rational in its political decision-making, the idea that citizens are inherently irrational is a myth. Research shows that people can train themselves to be more rational—more accurately evaluating reality and, thus, making wise decisions. However, this can only happen if people are motivated to put the time and effort into doing so. It is much easier to get people to agree that truth in politics is important than to get people to act in accordance with their stated agreement to this principle when doing so takes the cognitive effort of adopting new habits. These include systematically fact-checking political information, welcoming learning new information that goes against their current perspective and updating their beliefs, and many others.
How are you going to motivate people to learn that they’re wrong about everything? People who believe that Obama is a secret Muslim or that he took some time off to start ISIS don’t want to know the truth.
This is described in the “How Is This Project Different From Others Trying To Do Somewhat Similar Things?” and “Do You Have Any Evidence That This Will Work?” sections in the document linked above—here’s the link for convenience.
It seems like you are trying to create a new partisan political party. To skip unrelated drama I’ll refer to it as Evidence Based Politics Proponents, or EBPP, because that summarizes what I think you want and taboos the things that I think people find objectionable about the name.
The current two-party system has evolved over a hundred or so federal elections to approach at least a local maximum in their strategy. Their strategy is likely significantly better than the typical one; in particular the winning strategy is expected to be significantly better than intuitive strategies that are not incorporated into the winning strategy. I think that the values and methods you are proposing for the EBPP are intuitive and have been tried repeatedly, and have failed to ever take hold.
Why do you think that trying for honesty and rational decision making will be significantly more effective at winning elections or accomplishing goals in 2018 than it has been from 1791-present?
Do you think it hasn’t been tried before, or do you think that you have a better plan than The Coalition for Evidence-Based Policy, the group currently considering as “top tier” interventions that result in up to 14% of students reporting that they had never smoked (the largest effect size of the various studies; typical values were 5-10% less of the experimental group than the control group reporting recent drug use or heavy intoxication, after a long period)? With their strongest recommendations being for effect sizes that large, I can’t imagine how they would tackle fiscal policy recommendations and other policies that have a large expected value but are currently managed by ideological and tribal forces.
The current fact of political reality is that once a domain of science gets close to suggesting political policy, political control of the science becomes certain. For example, regardless of what the facts are about climate trends, the conclusions drawn by “independent” groups have an implied political policy which correlates strongly with the desired policy of their funding agency. The actual facts are unavailable for public perusal, partly because they are arcane and partly because they are obfuscated. The rational politics strategy would be determining how desirable each climate is, how desirable each level of CO2 production is, and how CO2 production maps to climate, to find the optimum balance between the two; that optimum strategy cannot happen when one camp is focusing on ideological goals of zero net emissions for reasons unrelated to climate and another camp is demanding zero restrictions based on ideology.
Your counterargument in no way engages with the most important parts of, say, Bryan Caplan’s argument for voter irrationality, which is that on the margin voters gain nothing by voting well and pay nothing for voting according to their emotions, whereas voting well requires costly investments of time and attention.
It’s possible that people could, by investing more energy and time into the voting process, vote better. But why would we believe that’s the right thing for people to do, on an individual level?
Yup, agreed that it may well not be worthwhile for voters who vote for reasons that are not oriented toward the most social good to vote rationally. This is why I say this is a project informed by EA values—it comes from the perspective that voting is like donating thousands of dollars to charity. For those who are purely self-interested, it’s really not rational to vote.
So to be clear, it’s not meant to target those who don’t care about the public good—just those mistaken about what is the best way to achieve the public good. For instance, plenty of voters are mistaken about the state of reality, and some of those folks would genuinely want the most good. The project is not meant to reach all, in other words—just that select slice.
I don’t even think most voters behave irrationally on many subjects.
They tend to form “blocks” or “tribes”, and from the point of view of the tribe, their voting can be seen as rational.
Of course I definitely think the world could be run a lot better, so some of what Caplan says is true.
But this is all still a problem for Gleb, because doing what’s best for your tribe is not necessarily being a rational truthseeker.
I would like to caution commenters that it seems to me like the comment section of this post is at risk of becoming an object-level political argument of the sort specifically proscribed by the rules. This may be amplified by the temporary disablement of downvoting because it means that controversial comments—those which would normally have a high sum, but a small difference, of upvotes+downvotes—will tend to rise to the top.
I would suggest spending extra time looking for insightful comments that are not object-level political arguments to upvote, and consciously avoiding upvoting object-level political arguments that you agree with.
And the (false) object level political statements in the OP aren’t proscribed by the rules?
Well I mean let him have his jab, the point of the site is to rise above too much unproductive object-level stuff. For an unproductive debate to be avoided, someone usually has to rise above it. Let that person be you.
Attempting to “rise above” object level discussion does not get you closer to truth. It means the conversation gets dominated by charlatans like Gleb.
I wouldn’t go ad-hominem against Gleb_Tsipursky, ad-hominem is never a good idea. He seems to me to be an earnest debater with a lot to offer, but even if he wasn’t I would still attack his arguments rather than attack the man.
I agree that in the vast majority of cases one should stick to the object level rather than the psychology. But in this particular case, see this about Gleb and his organisation (detailing his dubious ethical practices and general misinformation): http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/
It’s still ad-hominem to bring it up as an argument, and probably counterproductive to try and follow the guy around bringing up his mistakes everywhere he goes. Unless he is doing something directly connected with those criticisms.
Those criticisms are extremely relevant here. We aren’t having an abstract debate about rationality in politics, we are commenting on a post which announced a new project led by Gleb Tsipursky to try to bring rationality to politics. If you want to predict what this project is likely to end up doing, or how successful it will be, then one of the most relevant pieces of information that you have is Gleb’s track record.
This seems perfectly legitimate to post as a top level comment.
EDIT: though you will notice that the criticism by Jeff Kaufman, Gregory Lewis, Oliver Habryka, Carl Shulman, and Claire Zabel is very measured and I don’t see any insults or even any judgements about Gleb as a person in there. I would take a leaf out of their book.
Well, right here I’m not debating Gleb, I’m debating you.
Really, to me he looks like a standard cargo cult rationalist of the kind that Rational Wiki is full of.
Yeah I mean cargo cult rationality is definitely a risk. But still, it’s better to sink the argument than chase the guy, and as I said in a ‘no political debate’ setting, someone has to rise above it a bit.
I have actually been thinking about posting about politics here, I think there are interesting things going on in our politics but people making personal attacks against other commenters makes it harder to have a good debate.
Didn’t you also just say you don’t want object level political discussions?
Well if you have to choose between attacking someone’s object-level arguments about politics or attacking their person, I would say the latter is a greater evil even when the topic is controversial.
In the comments to this post I would avoid both, it’s reasonable to agree to disagree or just take the argument to PMs or something, or maybe have a special ‘politics’ thread. I mean you can even say “I disagree with X and took it to PMs” to avoid giving the impression that his assertion was unchallenged.
What would be the point of that? To convince the other guy to see his mistakes? That only works if the person you’re debating is well-meaning and exceptionally rational.
Otherwise, the point of debating in public is so that observers can see for themselves who’s being rational.
Yeah but that kind of debating tends to massively incentivize techniques for sophistry, leads to long pointless debates that take up time and yield no new knowledge. Here on LW we aim higher than that, and that is why there are norms to try and prevent it.
Meta-level discussion is never intended to “rise above” ground-level politics—that is indeed an illusion as you say. Rather, it’s intended to side-step the former temporarily, while still being useful by creating better frameworks for deliberation, mediation and similar good practices. It’s very important to understand this—any talk of “rising above” the actual, real-world issues is illusory and potentially dangerous.
The problem is that it’s frequently used as an attempt to reach conclusions while side-stepping the whole messy “looking at the facts on the ground” thing.
I am suggesting that people comment and vote a certain way; I don’t have control over the moderators, who would be the ones with the power to do anything about the post itself. Let me explicitly state that I think people probably shouldn’t upvote this post, even if it contains things they like, if they think (as I do) that it will promote and increase object-level political discussion. Unfortunately, because downvoting is disabled (for unrelated good reasons I support), it’s hard for me and others to formally note disapproval (by downvoting) of the politics, so I’m asking other people to try to avoid noting (in the upvote sense) approval of it unless they really mean it.
I am a moderator here. I have been contemplating removing this post for most of the time since I saw it. I am still indecisive. Feel free to comment here or PM me with your preferences.
What is our policy on politics?
Unspoken. If I had to name it, I would be happy with “can you talk about it elsewhere”.
In my opinion the LW policy is to steer away from politics when it obstructs discussion.
I wish I had a strong opinion to offer on this question. I think there are probably good arguments both for and against removing the post. I wish you luck in your difficult role.
I think object-level political discussion is undesirable for LW (at least at this time), but that’s quite tangential to this post, which is mostly about meta-level issues—and I consider these to be quite important. I think the post should stay.
I think this post is an interesting idea, but I feel like it’s built on some shaky assumptions.
As an instructive example, suppose (and let me DISCLAIM THAT I AM NOT ENDORSING THIS VIEW) that you are a racist, believe that people of certain races are objectively bad for a country because they have lower IQ and are more likely to commit crimes, and furthermore you want fewer (or none) of them in your country and want to use political means to achieve this. Then Obama is elected. Now you have two options:
1) forthrightly and honestly state that you are a racist, form a pro-racism political group, perhaps even call it the “Racist Party”, and campaign to impeach Obama for being black.
2) make some shit up about Obama being a member of ISIS and claim he wasn’t born in the USA.
Before we go on, NOTE AGAIN THAT I AM NOT ENDORSING THIS VIEW I AM ONLY USING IT AS AN EXAMPLE.
So, it seems fairly clear that option (1) is not really the rational choice. You would be doxxed, hounded on social media, and almost certainly lose your job. Others who agreed with you wouldn’t support you, because they would fear that the doxxing and firing and public shaming would be directed against them. So you choose (2): you get involved in the “birther” movement or whatever.
Now Gleb comes along and says “Hey Guys! Exciting news from the world of theoretical rationality!”
Or to take a different example, suppose you think that it would kind of suck for your country to lose all its industry in order to fight global warming, whilst other countries continue to pollute, and suppose you also really don’t want to lose your job at a factory and have your life ruined right now for the benefit of anonymous foreigners 20-70 years in the future. You have two choices:
1) Forthrightly and honestly state that you don’t really care enough about some marginal third-world foreigner getting killed in a slightly-more-severe-than-average drought to lose your job and livelihood for them right now. Make a political party called the “Procrastinate the Environment Party”, and argue that Global Warming is totally legit, but we can probably get away with procrastinating it. Also, limiting CO2 emissions is kind of like a game of chicken with other countries, so from a game-theory point of view it makes sense to keep polluting and see whether they crack first.
2) make up some shit about Global Warming being a liberal hoax
Or another example, suppose you believe that Global Warming is a hoax but questioning the word of certified expert scientists(tm) is not allowed in your social circle, so you come up with an elaborate meta-political explanation for why one can support the people calling Global Warming a hoax without believing it.
lol, very meta
Yup, agreed that it may well not be wise for those who have racist beliefs to be open about them. The same applies to the global warming stuff.
This is why I say this is a project informed by EA values—it comes from the perspective that voting is like donating thousands of dollars to charity and that voters care about the public good. It’s not meant to target those who don’t care about the public good—just those mistaken about what is the best way to achieve the public good. For instance, plenty of voters are mistaken about the state of reality, and some of those folks would genuinely want the most good. The project is not meant to reach all, in other words—just that select slice.
In particular, you’re not interested in reaching the voters who don’t want, say, Muslim migrants raping and occasionally murdering girls in their neighborhoods. Good to know.
Well, there is a more serious flaw than that particular issue: if you reach out to a very small slice of humans in our world and persuade them that they should be more rational in politics, politics will not get more rational. You have to appeal to everyone or almost everyone.
So, for example, people who read Breitbart have to be on board, as well as people who read The Guardian and the Daily Kos.
I feel like these requirements are kind of contradictory. What if a lot of people are selfish, racist (in a broad sense) and want to procrastinate global warming? Are we saying that “rational” political debate benefits them, or that it doesn’t? If it doesn’t benefit them then why should they be on board?
Are we saying that you have to be a globalist effective altruist who puts the needs of distant strangers above those of their own families to benefit from rational politics? Very few people have values like that! IIRC even Peter Singer struggled with that!
Do you have to care about the public good of the whole world, or is it OK if you only care about the public good of your tribe/country/race?
I’m talking about prioritizing the good of the country as a whole, not necessarily distant strangers—although in my personal value stance, that would be nice. Like I said, it’s an EA project :-)
A political group composed only of people who prioritize the good of the country over their own subtribe or self will lack the support needed to flourish.
It’s not that people disagree or don’t know about the object level facts. It’s that people are actively fighting to gain relative advantage over others. And that is a cultural problem, not a political one.
I realize that you have disclaimed any endorsement of this view, but some people might accidentally get the wrong idea of what “lower IQ” and “more likely to commit crimes” actually mean in these contexts, in the real world. Men have a “higher propensity to commit crimes” compared to women, and we don’t call them “objectively bad” for this. (Well, maybe some radfems do, actually. Not really sure about that!) People with a mere BA-level education have “lower IQ” compared to people with multiple PhD’s, and similarly, we don’t think that BA-holders are bad.
In other words, to even treat this as if it were a colorable argument reveals a basic failure of rationality. I wouldn’t care about this usually, but this whole post is about making politics more rational, and pointing out these things seems like a good place to start.
I looked for “Women are better than men” on Google, and I found a debate at debate.org which cited less crime as a reason that women are better than men.
EDIT: Also Psychology Today
Um, either the folks at debate.org and Psychology Today are secretly radfems, or I need to seriously update here. OK, I’m definitely re-assessing how common this line of argument (“Group X should be regarded as better than/superior to group Y, because of a slight difference in the average level of some psychological trait, such as propensity to commit crimes”) is in the real world. Thanks!
You earn Lesswrong gold for updating!
We definitely need Lesswrong gold.
On the other hand we do have a “violence against women” act, and a whole section of the justice department dedicated to crimes committed by men against women.
I’m not quite sure I follow. What’s the failure of rationality here?
So the point I am making is that we humans have set up a political system where “The Racist Party” and the “Procrastinate the Environment Party” are super-duper not allowed. If the incentives against espousing the view that you actually hold are much more severe than the incentives against trying to mess with epistemology so that you can take the action you wanted without having to reveal your “forbidden” view, then it makes sense to mess with epistemology.
And the reason that we can’t have a world where people lose their jobs and get doxxed and publicly shamed for breaking epistemological rules is that basically everyone does it. Dark side epistemology is a convergent instrumental goal in politics, so it’s really hard to build a coalition against it. And if you did have a system for enforcing epistemological standards, the first thing everyone would think is “Wow that’s great, how can I subvert this system to further my object-level preferences?”
The fact that you’re citing fake news sites like Politifact when describing this project does not bode well for it.
Um, Breitbart News is hardly a credible site to use to attack Politifact. Besides, that citation also had the Washington Post and The New York Times—do you call them fake news as well?
Yes, as I mentioned in my other comment. Now care to explain why you cited them in an article supposedly devoted to opposing fake news? Or is your definition of “fake news” news that contradicts something written in an “official true news source”, as opposed to something that contradicts reality?
I think the real problem here is that the pejorative term “Fake news” is succumbing to an effect I mentioned down in the comments earlier.
Now that’s not to say that there isn’t a real phenomenon of inaccurate or totally fabricated news, but I think the term “Fake News” (“Fake News Outlet”) particularly lends itself to being used as a bludgeon to attack the other side, because it’s really satisfying to catch one inaccurate story on a site you disagree with and thereby write the rest of the content off as “Fake News”.
“succumbing”? It succumbed to that effect decades ago.
Um, fake news sources like the New York Times have existed for at least a century, and probably for as long as news has existed. If anything is different in 2016, it’s that it’s becoming easier to check them and find out that they’re false.
Um, the candidate of lies and manipulation lost.
Um, I think you’re confused here. It was the Clinton campaign that was doing things like encouraging BLM with misleading statistics and outright lies.
..
It’s probably counterproductive to discuss object level politics. I kind of half-agree with you actually, but still, I can imagine this comment thread turning into an unproductive one.
I think it suffices to say that Gleb_Tsipursky has something of an anti-Trump political angle in the post (which may or may not be objectively correct), and agree to disagree as much as is possible on the object level.
So how do you propose to make politics more rational and better correspond to truth without discussing which policies are in fact rational and which political statements are true?
And yet he claimed his project is “non-partisan”.
Well, I mean you could argue that he is looking for a Trump supporter who likes good epistemology to join his team so that their biases would cancel out?
But you’re right, it’s shaky ground.
I’d probably start by making a new post called “Object-level contemporary political discussion open thread”. I wouldn’t do this because I expected a successful resolution, I would do it to contain the mess somewhere away from meta-level discussion.
I am comfortable with saying that my post is anti-post-truth politics. I think most LWers would agree that Trump relies more on post-truth tactics than other politicians. Note that I also called out Democrats for doing so as well.
Personally I think that Trump abuses epistemology in different ways than the left/PC establishment, rather than more.
For example, how much weight do we put on very successful liberal attempts to rid social-science academia of conservatives? Is it worse to lie about global warming, or to attempt to purge universities of all academics who are conservative, so that every paper that comes out of academia has a liberal bias?
What is the most common political affiliation of people who work for, for example, the IPCC? Probably quite liberal.
I agree with this view. His abuse is more blasé; that’s definitely true.
Brash man with a working-class NYC disposition: “Obama literally founded ISIS” or “Obama is secretly a Muslim”
Sensible people everywhere recoil and roll their eyes. Understanding why that’s absurd is pretty easy. The people who make those arguments aren’t exactly an intellectual class, and currently lack an intellectual ‘ruling caste.’
Refined person with an articulate tone of voice and an Ivy League law degree: “Women are oppressed everywhere, and currently make 70 cents on the dollar of what a man makes.”
Not horrific, stated by a well-educated person. Sounds reasonable, based on ‘real research.’ Comes from a sense of seemingly genuine concern and outrage for an injustice.
I used to take the stance that the first was much worse, as it is more brash and shameless. I’m not sure anymore how to measure these two against each other. I have, absolutely without a doubt, been mind-killed on this specific topic, because I personally hate charlatan lawyers who think they have the right to tell me how to live my life.
And most disgusting of all, probably doesn’t get counted as a lie. This is the problem I have with Gleb’s claim about Trump lying more—the SJWs have found ways of lying that are not technically lies.
Is it the post-truth world where true facts are lies because of reasons?
The false statement is “… therefore to be fair we should multiply every woman’s wage by 10⁄7,” rather than something like “… so to promote equality we should stop discouraging fourth-grade girls from studying math.”
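As a quick check on that arithmetic (the only input here is the 70-cents-on-the-dollar figure quoted above): if women earned $0.70 for every $1.00 a man earned, multiplying every woman’s wage by 10⁄7 would mechanically close the gap, since

$$0.70 \times \tfrac{10}{7} = 1.00.$$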
Those look like not-even-false claims because they almost are.
...
This smells like a nice motte-and-bailey to me, but I don’t really want to prosecute the debate here.
Agree that the attempts to rid academia of conservatives are bad.
Can you be comfortable saying that Trump lies more often, and more intensely, than prominent liberal politicians; usually does not back away from lies when called out; slams the credibility of those who call him out on lies; focuses on appealing to emotions over facts; tends to avoid providing evidence for assertions (such as that Russia was not behind the hack); etc.? This is what is meant by post-truth in the Oxford Dictionary definition of the term.
The problem I have is a measurement problem. How are we measuring lies? If I say that whales are fish and then I say all birds can fly, and you say the Holocaust didn’t happen, that’s 2 for me and 1 for you, so I’m a worse liar?
I’m going with the official definition of post-truth here, and am comfortable standing by it.
Your linked definition of “post-truth” is: “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”
Note the implicit inference that such circumstances are more common now than in the past, when this is almost certainly not true.
I didn’t realise the term “post-truth” had a precise, official meaning? Anyway, I would still say there is a bit of an issue with measuring lies, but I definitely concede the point that Donald is very, very far from a truth-teller.
Agreed with the issues around measuring lies, and noting the concession of the point—LW gold to you for highlighting the concession.
I’m not sure about The_Jaded_One; he seems to be willing to assert false things under peer pressure. However, that statement is in fact false, where by “false” I mean it doesn’t correspond to external observable reality. Specifically, I mean that Trump’s statements tend to map to reality better than those of liberal politicians.
At this point, I’m finished engaging with you, since you’re clearly not making statements based on reality. Good luck with growing more rational!
I would only agree that every major political party uses post-truth rhetorical methods and it is sad that each of them does. If you want to propose a unit of measurement for truthiness I’d consider comparing them.
Are you comfortable providing actual evidence for the claim that “Trump relies more on post-truth tactics than other politicians” or are you trying to argue for an epistemology of truth based on whatever the consensus by “experts” is?
It is quite important to note here that “an anti-Trump political angle” need not be pro-Obama or pro-Hillary. We should be careful not to fall for the all-too-common trap of hyper-factionalization and ‘politics as usual’!
Calling the New York Times a fake news source is racist since the paper is controlled by the Mexican oligarchy.
Nice, didn’t know that—thanks for pointing it out! Updated slightly on credibility of NYTimes on this basis.
Your document reads much more like “Rational Politics for Liberals.” That’s not necessarily a bad thing, but it’s really clear that you tacitly oppose lots of dissident/alternative/reactionary right views. I’m pretty sympathetic to what you’re trying to do, but I see it more as a concerted effort for thoughtful and rational discussion of how to solve the issues of alternative and neoreactionary right beliefs.
I don’t think any level of rational calculation of per-person terrorism risk will change the 20-50% of Americans who don’t want Muslim immigration. Terrorism is just the nicest and cleanest, most discrete and emotionally actionable issue to organize around. The impression I get is that many people prefer or like their own culture, and want to live in a culture with homogeneous groups of people. There is at least enough research showing that homogeneous groups are higher-trust and safer to make the preference to exclude sufficiently different people something that could be rational, or at least debatable within a rationalist framework (disclosure: I don’t think people are necessarily irrational for having in-group preferences or outsider-anxiety).
I have spent a lot of time trying to think about how we can reconcile these worldviews, the in-group preference and the more universalist/progressive political model. It’s really hard, maybe impossible. Right now the intellectuals of each group (read: not the meme spammers) can’t even honestly discuss the issues. Their views are so far beyond the pale to one another that they can’t even find a shared platform on which to say anything. Maybe one goal would be to get the intellectuals of both groups to take a stand against pointless meme arguments.
I’m not a huge fan of Ezra Klein, but his interview with Tyler Cowen gets at this (http://marginalrevolution.com/marginalrevolution/2016/10/conversation-ezra-klein-2.html):
KLEIN: I strongly agree. We do not have a language for demographic anxiety that is not a language that is about racism. And we need one. I really believe this, and I believe it’s been a problem, particularly this year. It is clear, the evidence is clear. Donald Trump is not about “economic anxiety.”
I think a true rational politics project would work on developing this ‘language for demographic anxiety.’ If we can imagine a slightly better pundit news discussion:
Interviewer: We have Guest 1 and Guest 2. Guest 1 prefers a more homogeneous culture, which he argues is higher trust. Guest 2 believes that the evolution of our culture as we know it has succeeded because we bring in different groups of people.
Guest 1: Great to be here. I have lots of respect for Guest 2, but I think we need to focus on bringing in highly educated people who match us on our core cultural dimensions.
Guest 2: There is more than simply education; over time we have seen [previous groups] slowly absorb into America and add their own cultural strategies into our melting pot.
...or something like that. Right now that conversation cannot happen. Or at least it can’t happen without faux and real outrage, accusations of racism, and so forth. And in saying all this, it’s probably clear where my political preferences rest. It’s a testament to the challenge of this endeavor that any individual’s model of what a ‘Rational Politics Project’ would look like secretly embeds the political views of its author.
The reason why culturally homogenous groups are higher trust is racism. The discussion from both sides needs to be about bad things, and racism is not infinitely bad or even any more inherently bad than inequality is.
For what it’s worth, I think “dissident/alternative/reactionary” activists have a lot to learn still about what politics is actually like in the real world. Deliberation and compromise are critically important in real-world politicking, while writing obscure theoretical essays à la Moldbug and other ‘neo-reactionary’ thinkers is a lot less relevant. (Of course, this failure mode is by no means unique to neo-reactionaries—plenty of radical leftists do the same thing!) And I don’t think that even Donald Trump and his ‘alt-right’ supporters, who are obviously a lot closer to wielding real power, can escape this need for finding good compromises. (Especially in the longer run.)
Nitpick—a lot of people really want to believe this. ISTM that they do not care so much about understanding Donald Trump or his constituency, and they’re still trying to whitewash the blatant strategic mistakes of the Hillary Clinton campaign.
I see the situation right now as more liberals being closer to rational thinking than more conservatives, but it hasn’t been the case in the past. I don’t know how this document would read if more conservatives were closer to rational thinking.
Regarding the Muslim issue, you might want to check out the radio interview I linked in the document. It shows very clearly how I got a conservative talk show host to update toward being nicer to Muslims.
If you’re interested in participating in this project, email me at gleb [at] intentionalinsights [dot] org
“Rational politics” seems like too narrow a focus to be successful in real-world politics. You should consider expanding this project from mere ‘rationality’ to encompass the broader notion of political virtues, as famously identified by political scientist Bernard Crick in his well-known book In Defence of Politics, which is often assigned as required reading in intro poli-sci courses.
In particular, as most people will know, “rationalism” in politics has some unfortunate connotations of ivory-tower over-intellectualism and disregard for most real-world issues and dynamics. Avoiding this impression, however misguided in this particular case, would seem to be of critical importance to any such project. You can view this as a simple, politically-grounded argument for why a broader focus on widely-recognized political virtues would in fact be highly desirable.
I hear you about “rationalism in politics.” The public-facing aspect of this project will be using terms like “post-lies movement” and so on. We’re using “Rational Politics” as the internal and provisional name for now, while we are gathering allies and spreading word about the project rather than doing much public outreach.
Good call—this will give folks a much better idea of what the project is actually about.
(Of course, ordinary sloppy thinking is just as problematic as overt “lies”, and I still think your project could usefully expand to encompass other facets of “wise decision-making” in a political context. There are many other “dark patterns”—particularly, black-and-white rhetoric that blatantly rejects any sort of compromise or adaptation as possible virtues—which are just as foolish and dangerous in practice (and there’s basically no controversy that this is the case, at least in the abstract). Again, Donald Trump’s campaign provides the best example of this as of late, but we’ve seen similar rhetoric from the “left” in the past—and some people would even say that the current campaign was no exception!)
We chose the issue of lies specifically because it is something a bunch of people can get behind opposing, across the political spectrum. Otherwise, we have to choose political virtues, and it’s always a trade-off. So the two fundamental orientations of this project are utilitarianism and anti-lies.
FYI, we plan to tackle sloppy thinking too, as I did in this piece, but that’s more complex, and it’s important to start with simple messages first. Heck, if we can get people to realize the simple difference between truth and comfort, I’d be happy.
Utilitarianism is nice of course, but since you’re operating in a political context here, it’s important to go for a politically mindful variety of utilitarianism, one that treats other people’s existing political stances as representative of, or at least as useful evidence for, what they actually care about. Virtues like adaptation, compromise, conciliation—even humor, sometimes—can be seen as ways to operationalize this sort of utilitarianism in practice, and also to promote it to average folks who generally don’t know what “utilitarianism” is actually about!
This is probably too complex to hash out in comments—lots of semantics issues and some strategic/tactical information that might be best to avoid discussing publicly. If you’re interested in getting involved in the project and want to chat on Skype, email me at gleb [at] intentionalinsights [dot] org
No worries—I trust you to get the strategic/tactical side right, and it’s quite promising to see that you’re aware of these issues as well. I now think that this can be a very promising project, since you’re clearly evading the obvious pitfalls I was concerned about when I read the initial announcement!
Thank you!
Is it in fact misguided? Certainly looking at the OP the impression appears to be correct.
What are you actually planning to do that isn’t already being done? “Fake news” is already the biggest story of the year, but the Dunning-Kruger effect is ensuring that nothing actually changes.
How are you going to motivate people to learn that they’re wrong about everything? People who believe that Obama is a secret Muslim or that he took some time off to start ISIS don’t want to know the truth.
This is described in the “How Is This Project Different From Others Trying To Do Somewhat Similar Things?” and “Do You Have Any Evidence That This Will Work?” sections in the document linked above—here’s the link for convenience.
It seems like you are trying to create a new partisan political party. To skip unrelated drama I’ll refer to it as Evidence Based Politics Proponents, or EBPP, because that summarizes what I think you want and taboos the things that I think people find objectionable about the name.
The current two-party system has evolved over a hundred or so federal elections to approach at least a local maximum in strategy. The parties’ strategies are likely significantly better than the typical one; in particular, the winning strategy is expected to be significantly better than intuitive strategies that are not incorporated into it. I think that the values and methods you are proposing for the EBPP are intuitive, have been tried repeatedly, and have failed to ever take hold.
Why do you think that trying for honesty and rational decision making will be significantly more effective at winning elections or accomplishing goals in 2018 than it has been from 1791-present?
Do you think it hasn’t been tried before, or do you think that you have a better plan than The Coalition for Evidence-Based Policy, the group that currently rates as “top tier” interventions resulting in up to 14% of students reporting that they had never smoked (the largest effect size among the various studies; typically, between 5-10% less of the experimental group than of the control group reported recent drug use or heavy intoxication after a long period)? With their strongest recommendations reserved for effect sizes that large, I can’t imagine how they would tackle fiscal policy recommendations and other policies that have a large expected value but are currently managed by ideological and tribal forces.
The current political reality is that once a domain of science gets close to suggesting political policy, political control of the science becomes certain. For example, regardless of what the facts are about climate trends, the conclusions drawn by “independent” groups have an implied political policy which correlates strongly with the desired policy of their funding agency. The actual facts are unavailable for public perusal, partly because they are arcane and partly because they are obfuscated. The rational politics strategy would be to determine how desirable each climate is, how desirable each level of CO2 production is, and how CO2 production maps to climate, in order to find the optimum balance between the two; that optimum strategy cannot happen when one camp is focusing on an ideological goal of zero net emissions for reasons unrelated to climate and another camp is demanding zero restrictions based on ideology.
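To make that “optimum balance” framing concrete, here is a minimal sketch of the kind of calculation being described. Everything in it is a made-up placeholder: the utility curves, the units, and the grid of candidate emissions levels are invented for illustration and are not drawn from the comment or from any climate data.

```python
# Toy illustration of the "optimum balance" idea: pick the emissions level
# that maximizes (benefit of emitting) minus (climate damage from emitting).
# Both curves below are hypothetical placeholders, not real estimates.

def economic_benefit(co2: float) -> float:
    # Hypothetical: benefits of emitting grow with diminishing returns.
    return co2 ** 0.5

def climate_damage(co2: float) -> float:
    # Hypothetical: damage grows faster than linearly with emissions.
    return 0.02 * co2 ** 1.5

def net_utility(co2: float) -> float:
    return economic_benefit(co2) - climate_damage(co2)

# Search a grid of candidate emissions levels (arbitrary units, 0 to 100).
candidates = [x / 10 for x in range(0, 1001)]
best = max(candidates, key=net_utility)
print(f"Toy optimum: emissions level {best:.1f}, net utility {net_utility(best):.2f}")
```

On these made-up curves the optimum lands in the interior rather than at either camp’s preferred endpoint (zero net emissions or zero restrictions), which is the trade-off the comment says an ideologically captured debate cannot perform.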
I’m declaring the discussion on this a failure, and moving this post to Gleb’s drafts.
So you admit that your motivated perception is more important than reality.